Dear Lick:
I have studied your comments on our proposal, and there seems to
be quite a problem. Let me first mention the areas in which we can certainly
accommodate you, albeit with some work.
1. We can put general material defining the subject of AI in front
of the proposal and relegate technical arguments to appendices. The general
material won't be new, of course, and will resemble material that has
previously appeared in the literature and, to some extent, in earlier
proposals from Stanford. Not all of the technical material will have
to be moved - for example, the references to first order logic can be
explained. We could remove all the technical material, but some of the
reasons for doing one thing rather than another are technical, and the
reasons why a proposal from us is better than one from a random research
firm are often quite technical.
2. We can list accomplishments again and make the goals more
definite.
3. The relation between proving mathematical theorems and practical
AI can be made clearer. a. It's a practice domain with clear standards
of accomplishment. b. Mathematics is used to model real-world situations,
including those of defense interest, and extending capability in this
direction can help ensure increased objectivity - past computer
models used by DoD have often suffered from not taking important facts into
account, and a more formal system will help with this.
Here are some of the stickier problems.
1. The Stanford AI Lab is pursuing a variety of approaches to AI,
because the major researchers in the Lab have varying points of view:
McCarthy, Winograd, Green, Luckham, Binford and Weyhrauch
vary in the relative importance they assign to different activities and
approaches. There is not total dissonance, but neither is it true that
everyone regards himself as fitting into a single grand plan with everyone
else playing an equally important role.
2. The motivation for discussing the limitations of the accomplishments
of AI was not to seem a hero when the problem is solved, but rather to
justify tackling a problem less ambitious than the problems people have
tackled in the past and which some people are tackling today. In my
opinion, further progress toward really intelligent systems requires more
attention to formal reasoning than has been given in the past. Terry
is definitely pursuing a different approach and so is Cordell.
3. The problem of application of the results of AI research is
quite complex. In my opinion, many of the applications depend on first
doing some things that many people consider too pedestrian for scientific
attention. Examples:
a. AI can help with information retrieval, but getting the information
into files and providing present users with widespread cheap terminals suitably
networked comes first and will provide enormous payoff by itself.
b. Computer control of airplanes and weapons is very desirable, but
ordinary remote control should come first. The scientific problems for this
are substantially solved - only the initiative is lacking.
4. The central problems of AI are indeed difficult, and a long term
basic research program is necessary. It seems to me that DoD support is
justified provided sufficient intermediate results are obtained on a
reasonable time-scale, but the well will dry up if you try to get the
projects to produce only intermediate results.
5. If ARPA tries to get us to plan what will be achieved in basic
research, it will succeed in getting the plans. However, the extent to which
the results correspond to the plans will be less than with applied work.
As with applied research, the progress will be slower than anticipated, because
the plans can only allow for those difficulties that have been identified.
Merely applying a deflation factor to the plans won't really work, because